57 research outputs found

    A New Approach for Broken Bar Fault Detection in Three-Phase Induction Motor Using Instantaneous Power Monitoring under Low Slip Range

    Get PDF
    Most research addresses the detection of broken bar faults in three-phase induction motors at high slip, so detecting such faults in the low slip range is both challenging and of great interest. In this study, a novel investigation of broken bar faults using instantaneous power is presented. The method is based on the calculation and frequency analysis of the partial and total instantaneous power under a low slip range. The squirrel cage induction machine model used takes the geometry and winding layout into account; it is used to analyze the impact of a broken bar on the instantaneous power spectrum. The ideas proposed in this paper are verified experimentally. The results show that broken bar faults can be detected more reliably under a low slip range when using a large frequency region of both the partial and total instantaneous power spectra.
    DOI: http://dx.doi.org/10.11591/ijece.v4i1.461
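    The core of the method is computing the instantaneous power p(t) = v(t) * i(t) and inspecting its spectrum. A minimal sketch of that step, with an illustrative toy signal (a 50 Hz supply and a small current sideband standing in for a broken-bar signature; signal values and the fault model are assumptions, not the paper's machine model):

```python
import numpy as np

def instantaneous_power_spectrum(v, i, fs):
    """Return the frequency axis and amplitude spectrum of the
    instantaneous power p(t) = v(t) * i(t).

    v, i : sampled phase voltage and current (same length)
    fs   : sampling frequency in Hz
    """
    p = np.asarray(v) * np.asarray(i)      # instantaneous power
    p = p - p.mean()                       # drop the DC component
    spectrum = np.abs(np.fft.rfft(p * np.hanning(len(p))))
    freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)
    return freqs, spectrum

# Toy signal: 50 Hz supply; the current carries a small sideband at
# (1 - 2s) * f, a classic broken-bar component, with slip s = 0.01.
fs, f, s = 10_000, 50.0, 0.01
t = np.arange(0, 2.0, 1.0 / fs)
v = np.sqrt(2) * 230 * np.cos(2 * np.pi * f * t)
i = 10 * np.cos(2 * np.pi * f * t - 0.5) \
    + 0.1 * np.cos(2 * np.pi * (1 - 2 * s) * f * t)
freqs, spec = instantaneous_power_spectrum(v, i, fs)
```

    In the power spectrum the fundamental product appears at 2f, with fault-related components offset by multiples of 2sf around it and near DC, which is what makes a wide frequency region informative.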

    Design Choices for X-vector Based Speaker Anonymization

    Get PDF
    The recently proposed x-vector based anonymization scheme converts any input voice into that of a random pseudo-speaker. In this paper, we present a flexible pseudo-speaker selection technique as a baseline for the first VoicePrivacy Challenge. We explore several design choices for the distance metric between speakers, the region of x-vector space where the pseudo-speaker is picked, and gender selection. To assess the strength of anonymization achieved, we consider attackers using an x-vector based speaker verification system who may use original or anonymized speech for enrollment, depending on their knowledge of the anonymization scheme. The Equal Error Rate (EER) achieved by the attackers and the decoding Word Error Rate (WER) over anonymized data are reported as the measures of privacy and utility. Experiments are performed using datasets derived from LibriSpeech to find the optimal combination of design choices in terms of privacy and utility
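    Privacy is reported here as the attacker's Equal Error Rate. A minimal sketch of how an EER can be computed from verification scores (a grid-sweep approximation; the score values below are illustrative):

```python
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    """Equal Error Rate: the operating point where the false-rejection
    rate (FRR) equals the false-acceptance rate (FAR), approximated by
    sweeping the decision threshold over the pooled, sorted scores."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)
    labels = labels[order]
    # As the threshold sweeps upward: FRR = fraction of targets at or
    # below it, FAR = fraction of non-targets strictly above it.
    frr = np.cumsum(labels) / labels.sum()
    far = 1 - np.cumsum(1 - labels) / (1 - labels).sum()
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2
```

    For an anonymization system, a higher attacker EER (closer to 50%) means the attacker can no longer link trials reliably, i.e. stronger privacy.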

    ACCIO: How to Make Location Privacy Experimentation Open and Easy

    Get PDF
    The advent of mobile applications collecting and exploiting the location of users opens a number of privacy threats. To mitigate these issues, several protection mechanisms have been proposed over the last decade to protect users' location privacy. However, these protection mechanisms are usually implemented and evaluated in a monolithic way, with heterogeneous tools and languages, and using different methodologies, metrics and datasets. This lack of standardisation makes evaluating and comparing protection mechanisms particularly hard. In this paper, we present ACCIO, a unified framework that eases the design and evaluation of protection mechanisms. Thanks to its Domain Specific Language (DSL), ACCIO allows researchers and practitioners to define and deploy experiments in an intuitive way, and to easily collect and analyse the results. ACCIO already comes with several state-of-the-art protection mechanisms and a toolbox to manipulate mobility data, and it is open and easily extensible with new evaluation metrics and protection mechanisms. This openness, combined with the description of experiments through a user-friendly DSL, makes ACCIO an appealing tool for reproducing and disseminating research results. We present ACCIO's motivation and architecture, and demonstrate its capabilities through several use cases involving multiple metrics, state-of-the-art protection mechanisms, and two real-life mobility datasets collected in Beijing and the San Francisco area

    A comparative study of speech anonymization metrics

    Get PDF
    Speech anonymization techniques have recently been proposed for preserving speakers' privacy. They aim at concealing speakers' identities while preserving the spoken content. In this study, we compare three metrics proposed in the literature to assess the level of privacy achieved. Through simulation, we exhibit the differences and blind spots of some metrics. In addition, we conduct experiments on real data and state-of-the-art anonymization techniques to study how they behave in a practical scenario. We show that the application-independent log-likelihood-ratio cost function C_llr^min provides a more robust evaluation of privacy than the equal error rate (EER), and that detection-based metrics provide different information from linkability metrics. Interestingly, the results on real data indicate that current anonymization design choices do not induce a regime where the differences between those metrics become apparent
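    The C_llr cost has a closed form over target and non-target log-likelihood-ratio scores, which a short sketch makes concrete (this is the standard definition of C_llr; its score-calibrated minimum C_llr^min, used in the study, additionally requires an optimal recalibration step such as PAV, which is not reproduced here):

```python
import numpy as np

def cllr(tar_llrs, non_llrs):
    """Application-independent log-likelihood-ratio cost:
    C_llr = 1/2 * ( mean log2(1 + e^-llr) over target trials
                  + mean log2(1 + e^+llr) over non-target trials ).
    0 is perfect; 1 corresponds to an uninformative system."""
    tar = np.asarray(tar_llrs, dtype=float)
    non = np.asarray(non_llrs, dtype=float)
    # log(1 + e^x) computed stably via logaddexp, converted to bits
    c_tar = np.mean(np.logaddexp(0, -tar)) / np.log(2)
    c_non = np.mean(np.logaddexp(0, non)) / np.log(2)
    return 0.5 * (c_tar + c_non)
```

    Unlike the EER, which only reflects one operating point, C_llr integrates the quality of the scores across all operating points, which is one reason it can give a more robust privacy assessment.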

    Enhancing speech privacy with slicing

    Get PDF
    Privacy preservation calls for speech anonymization methods which hide the speaker's identity while minimizing the impact on downstream tasks such as automatic speech recognition (ASR) training or decoding. In the recent VoicePrivacy 2020 Challenge, several anonymization methods have been proposed to transform speech utterances in a way that preserves their verbal and prosodic contents while reducing the accuracy of a speaker verification system. In this paper, we propose to further increase the privacy achieved by such methods by segmenting the utterances into shorter slices. We show that our approach has two major impacts on privacy. First, it reduces the accuracy of speaker verification with respect to unsegmented utterances. Second, it also reduces the amount of personal information that can be extracted from the verbal content, in a way that cannot easily be reversed by an attacker. We also show that it is possible to train an ASR system from anonymized speech slices with negligible impact on the word error rate
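    The segmentation step itself is simple; a minimal sketch is shown below (the 2-second slice duration is an illustrative assumption, and the paper's full pipeline additionally anonymizes the slices and breaks the link between them, which is not shown):

```python
def slice_utterance(signal, fs, slice_dur=2.0):
    """Split a waveform (any sequence of samples) into consecutive
    slices of slice_dur seconds; the shorter remainder, if any,
    becomes the last slice."""
    hop = int(slice_dur * fs)
    return [signal[i:i + hop] for i in range((0), len(signal), hop)]
```

    Shorter slices give a verification or content-extraction attacker less material per trial, which is the mechanism behind both privacy gains described above.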

    Privacy and utility of x-vector based speaker anonymization

    Get PDF
    We study the scenario where individuals (speakers) contribute to the publication of an anonymized speech corpus. Data users then leverage this public corpus to perform downstream tasks (such as training automatic speech recognition systems), while attackers may try to de-anonymize it based on auxiliary knowledge they collect. Motivated by this scenario, speaker anonymization aims to conceal the speaker identity while preserving the quality and usefulness of speech data. In this paper, we study x-vector based speaker anonymization, the leading approach in the recent VoicePrivacy Challenge, which converts an input utterance into that of a random pseudo-speaker. We show that the strength of the anonymization varies significantly depending on how the pseudo-speaker is selected. In particular, we investigate four design choices: the distance measure between speakers, the region of x-vector space where the pseudo-speaker is mapped, the gender selection, and whether to use speaker- or utterance-level assignment. We assess the quality of anonymization from the perspective of the three actors involved in our threat model, namely the speaker, the user and the attacker. To measure privacy and utility, we use respectively the linkability score achieved by the attackers and the decoding word error rate incurred by an ASR model trained with the anonymized data. Experiments on the LibriSpeech dataset confirm that the optimal combination of design choices yields state-of-the-art performance in terms of privacy protection as well as utility. Experiments on the Mozilla Common Voice dataset show that the best design choices with 50 speakers guarantee the same anonymization level against a re-identification attack as raw speech with 20,000 speakers
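    Two of the design choices above (gender selection and the region of x-vector space) can be sketched as a pseudo-speaker picker over a public x-vector pool. This is an illustrative sketch, not the challenge baseline: the 10% region size, the averaging of up to 3 candidates, and cosine similarity are assumptions standing in for the paper's actual settings:

```python
import numpy as np

def select_pseudo_xvector(source_xvec, pool_xvecs, pool_genders,
                          target_gender, region="far", n_avg=3, rng=None):
    """Pick a pseudo-speaker x-vector from a pool.

    Gender selection: only pool speakers of target_gender are kept.
    Region: "far" draws from the least similar candidates to the
    source speaker, "near" from the most similar; a small random
    subset of that region is averaged to form the pseudo-speaker.
    """
    rng = np.random.default_rng(rng)
    mask = np.array([g == target_gender for g in pool_genders])
    candidates = np.asarray(pool_xvecs)[mask]
    # cosine similarity of each candidate to the source speaker
    sims = candidates @ np.asarray(source_xvec) / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(source_xvec))
    order = np.argsort(sims)              # ascending similarity
    k = max(1, len(candidates) // 10)     # 10% region (assumption)
    region_idx = order[:k] if region == "far" else order[-k:]
    chosen = rng.choice(region_idx, size=min(n_avg, k), replace=False)
    return candidates[chosen].mean(axis=0)
```

    Speaker-level assignment would call this once per speaker and reuse the result for all their utterances; utterance-level assignment would call it per utterance.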

    Préservation contre les attaques de ré-identification sur des données de mobilité

    No full text
    With the wide propagation of handheld devices, more and more mobile sensors are being used by end users on a daily basis. These sensors can be leveraged to gather useful mobility data for city planners, business analysts and researchers. However, gathering and exploiting mobility data raises many privacy threats: sensitive information such as one's home or workplace, hobbies, religious beliefs, and political or sexual preferences can be inferred from the gathered data. Over the last decade, Location Privacy Protection Mechanisms (LPPMs) have been proposed to protect user privacy. They alter mobility data to enforce formal guarantees (e.g., k-anonymity or differential privacy), hide sensitive information (e.g., erase points of interest) or act as countermeasures against particular attacks. In this thesis, we focus on the threat of re-identification, which aims at re-linking an anonymous mobility trace to the known past mobility of its user. First, we propose re-identification attacks (AP-Attack and ILL-Attack) that find vulnerabilities in and stress current state-of-the-art LPPMs to quantify their effectiveness. We also propose a new protection mechanism, HMC, that uses heat maps to guide the transformation of mobility data so as to change the behaviour of a user, making her look similar to someone else rather than to her past self, which protects her from re-identification attacks. This alteration of the mobility trace is constrained by controlling the utility of the data, to minimize the distortion in the quality of the analyses performed on it
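    The heat-map representation at the heart of both the attacks and the HMC defence can be illustrated with a toy linker: summarize each user's history as a visit-frequency histogram and link an anonymous trace to the closest one. This is only a sketch of the general idea; the thesis's AP-Attack and ILL-Attack are more elaborate, and the unit-square coordinates, 8x8 grid and L1 distance are assumptions:

```python
import numpy as np

def heatmap(trace, bins=8):
    """Normalised 2-D visit-frequency histogram of a mobility trace
    (an array of (x, y) positions in the unit square)."""
    trace = np.asarray(trace)
    h, _, _ = np.histogram2d(trace[:, 0], trace[:, 1],
                             bins=bins, range=[[0, 1], [0, 1]])
    return h / h.sum()

def reidentify(anon_trace, known_traces):
    """Toy re-identification: link an anonymous trace to the known
    user whose historical heat map is closest in L1 distance."""
    target = heatmap(anon_trace)
    dists = [np.abs(heatmap(t) - target).sum() for t in known_traces]
    return int(np.argmin(dists))
```

    HMC works against exactly this kind of linker: it perturbs a trace so that its heat map moves away from the user's own history and towards another user's, under a utility constraint.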

    Evolutionist approach and MFCC modeling for Arabic automatic recognition

    No full text
    In this article, we propose a system for the automatic recognition of isolated Arabic words. It is a multi-speaker, speaker-independent system that is robust in noisy environments; it uses a genetic algorithm for recognition and Mel-frequency cepstral coefficients (MFCC) to model the speech signal, and it was implemented on the Matlab 7 platform. A new mutation method (injection mutation) is proposed and used in the genetic algorithm. To evaluate the performance of the system, we built an oral corpus that represents the main characteristics of the Arabic language; it can be used by other researchers to test and validate their own systems for the Arabic language.
    Keywords: automatic speech recognition, genetic algorithm, Arabic language, Mel frequency cepstral coefficients (MFCC), language corpora
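    The MFCC front-end used to model the speech signal follows a standard pipeline: power spectrum, triangular mel filterbank, log, then DCT-II. A minimal single-frame sketch is below (the filterbank size and 13 coefficients are conventional assumptions; the article's genetic-algorithm matcher and its injection mutation are its own contribution and are not reproduced here):

```python
import numpy as np

def mfcc(signal, fs, n_fft=512, n_mels=26, n_ceps=13):
    """Minimal MFCC of a single frame: windowed power spectrum ->
    mel filterbank -> log -> DCT-II (first n_ceps coefficients)."""
    frame = signal[:n_fft] * np.hamming(min(len(signal), n_fft))
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
    # triangular mel filterbank spanning 0 Hz .. fs/2
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    mel_inv = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    edges = mel_inv(np.linspace(0, mel(fs / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(fbank @ power + 1e-10)
    # DCT-II decorrelates the log filterbank energies
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1)
                 / (2 * n_mels))
    return dct @ log_energy
```

    In a full recogniser this would run on overlapping frames of each word, and the resulting coefficient sequences would be what the genetic algorithm matches against the corpus templates.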